33 research outputs found

    Distributed and Collaborative Test Scheduling to Determine a Green Build

    In the parlance of software testing and verification, a green build is a software build that passes tests on all reference devices. A green build is typically determined by a centralized test scheduler, which maintains a database of parameters, e.g., build artifacts, build branches, etc., corresponding to each device, and uses the database to schedule tests efficiently. Centralized scheduling is computationally intensive, and maintenance of the database is a significant burden. Per the techniques of this disclosure, devices in a pool collaboratively pick a new build to test. The first device to start within a given scheduling interval picks a build, and the remaining devices pick the same build. The devices independently test the selected build. The first device to finish testing, whether due to a pass or a failure, picks another build, and the remaining devices follow the newly picked build. The process continues until the devices converge upon a green build. Scheduling tests in this distributed manner enables efficient determination of the green build.
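    The pick-and-follow convergence loop described above can be sketched roughly as follows; the data structures and the per-device result map are illustrative assumptions, not the disclosed implementation:

```python
def find_green_build(builds, device_results):
    """Sketch of the collaborative scheduling loop: all devices test the
    same build; on any failure, the first device to finish picks the next
    build and the rest follow, until one build passes on every device.

    device_results is a hypothetical map: build -> {device: passed?}.
    """
    for build in builds:                  # the pick made by the first device
        outcomes = device_results[build]  # each device tests independently
        if all(outcomes.values()):        # passed on all reference devices
            return build                  # converged: this is the green build
    return None                           # no green build among the candidates
```

    In a real pool, the "pick" would be coordinated through a small shared record (e.g., the build currently under test) rather than a sequential loop, but the convergence condition is the same.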

    Utilizing a Human Computer Interaction Technique for Enabling Non-Disruptive Exploration of App Contents and Capabilities in a Query Recommendation System

    Conventional techniques for launching apps do not provide any facility to quickly launch app contents or app capabilities. This disclosure describes techniques for quickly launching app capabilities and contents, surfacing those capabilities/contents that result in a high user interaction rate (UIR) without harming a total-clicks metric. In contrast to conventional techniques, app contents and capabilities with high potential UIR are adaptively determined using heuristics and user-permitted interaction data. A quick-scroll button advantageously provides a scroll interface with suggestions that are otherwise hidden behind a virtual keyboard or not displayed. App contents and capabilities with high potential UIR are determined with low computational and UI costs. By directly enabling the scrolling and selection of relevant, popular, or personalized app content and capabilities, the user interface provides enhanced convenience and speed of operation.

    Automatic Triggering of Contextual Suggestions for Web Pages

    This disclosure describes techniques for automatic triggering of contextual suggestions based on web page content via a web browser. With user permission, the web page content, including the uniform resource locator (URL) and page content, is analyzed to determine whether to trigger contextual suggestions. For example, a simple web page may be analyzed using a client-side script. A user interface element with a suggestion is displayed if the analysis indicates that a suggestion is appropriate. For example, a suggestion to add ingredients to the user’s shopping cart may be triggered when a user views a recipe web page. The suggestion may be overlaid on the web page in the browser user interface. Appropriate thresholds are used to trigger only those suggestions that are helpful.
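    A minimal client-side sketch of such threshold-based triggering might look like the following; the keyword signals, the scoring scheme, and the threshold value are invented for illustration, and the URL could feed additional signals:

```python
def should_trigger_suggestion(url, page_text, threshold=0.6):
    """Score a page against a few hypothetical keyword signals and trigger
    a contextual suggestion only above a threshold, so that only helpful
    suggestions are shown. Returns the matched topic or None."""
    signals = {
        "recipe": ("ingredients", "preheat", "servings"),  # illustrative
    }
    text = page_text.lower()
    best_topic, best_score = None, 0.0
    for topic, keywords in signals.items():
        score = sum(kw in text for kw in keywords) / len(keywords)
        if score > best_score:
            best_topic, best_score = topic, score
    if best_score >= threshold:
        return best_topic  # e.g., offer to add ingredients to a cart
    return None            # below threshold: show nothing
```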

    On-Device Query and Metadata Caching for Expedited Inference and Rendering of Answer Cards

    This publication describes methods and techniques for mobile devices to enhance the user experience associated with search applications. In aspects, an Intent Manager associates a query suggestion with a corresponding answer card (e.g., search result) for a given query text. The association may involve the Intent Manager identifying that the submission of a query suggestion results in a search application presenting one or more corresponding answer cards. Further, the association may include the production of metadata relating the query text to the submitted query suggestion. The Intent Manager can then cache the metadata on the device. In doing so, when a user performs an identical search at a later time, the Intent Manager can search through the cached metadata for a query text, find the query text, and extract the related query suggestion from the metadata. Using the related query suggestion as the search criteria, the search application can simultaneously present a query suggestion and its corresponding answer card(s) to a user during a search.
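    The cache-then-lookup flow can be sketched as below; the class and method names are hypothetical stand-ins for the Intent Manager's metadata store, not the disclosed API:

```python
class IntentManagerCache:
    """Sketch of on-device metadata caching: after a search, store metadata
    relating the query text to the suggestion whose submission produced the
    answer card(s); a later identical search hits the cache, letting the
    app render the suggestion and its answer card(s) together."""

    def __init__(self):
        self._meta = {}  # query_text -> (suggestion, answer_card_ids)

    def record(self, query_text, suggestion, answer_card_ids):
        self._meta[query_text] = (suggestion, tuple(answer_card_ids))

    def lookup(self, query_text):
        return self._meta.get(query_text)  # None on a cache miss

# Usage: record one association, then answer an identical later search.
cache = IntentManagerCache()
cache.record("weather", "weather today", ["card:forecast"])
```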

    On-device Query Caching For Enhancing Zero-Prefix Query Suggestions

    User interfaces (UIs) that provide search functionality, e.g., search boxes, virtual assistants, etc., often include mechanisms that provide users with query suggestions within the UI. Query suggestions presented prior to receiving any input from the user are referred to as zero-prefix query suggestions. Zero-prefix query suggestions are typically derived by a ranking algorithm that is based on recently submitted and/or recurrent queries, accessed from a user-permitted server-side query cache. However, resource and operational constraints of a server cache can result in suboptimal zero-prefix query suggestions. This disclosure describes the implementation of a local on-device cache to overcome these limitations and improve the relevance and effectiveness of zero-prefix query suggestions.
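    A toy version of ranking zero-prefix suggestions from a local on-device query cache might look like this; the frequency-plus-recency score stands in for the ranking algorithm, and its weights are illustrative assumptions:

```python
from collections import Counter

def zero_prefix_suggestions(local_history, k=3):
    """Rank zero-prefix suggestions from an on-device query history:
    recurrent queries score by count, with a small recency bonus so that
    fresher queries break ties. Returns the top-k suggestions."""
    counts = Counter(local_history)
    recency = {q: i for i, q in enumerate(local_history)}  # later = fresher
    def score(q):
        return counts[q] + 0.1 * recency[q]  # illustrative weighting
    return sorted(counts, key=score, reverse=True)[:k]
```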

    Automatic Delivery of Machine Learning Models to User Device to Enable App Features

    Many mobile apps, e.g., virtual assistants, navigation apps, video apps, etc., use various machine learning (ML) models. Different features of an app may have respective associated ML models that are often not available locally on a user device and need explicit user action to download from a server. This disclosure describes techniques for automatic synchronization of machine learning models to a user device. With user permission, ML model(s) of an app that are not available locally, or for which a new version is available, are automatically downloaded when certain conditions are met. The conditions can include network conditions and other device-specific parameters. The overhead of determining whether the conditions are satisfied is reduced by utilizing an on-device cache and flag, which can reduce the impact on app startup latency.
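    The cached-flag pattern can be sketched as follows; the specific conditions (network type, battery, disk) and the TTL are illustrative assumptions, not the disclosed criteria:

```python
import time

class ModelSyncChecker:
    """Sketch of condition checking with an on-device cache and flag: the
    expensive checks run at most once per TTL; in between, a cached flag
    answers immediately, reducing the impact on app startup latency."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._flag = None
        self._checked_at = 0.0

    def should_sync(self, device_state, now=None):
        now = time.time() if now is None else now
        if self._flag is not None and now - self._checked_at < self.ttl:
            return self._flag  # cheap cached answer, no re-evaluation
        # Full (illustrative) condition check, then cache the result.
        self._flag = (device_state["network"] == "wifi"
                      and device_state["battery_pct"] >= 30
                      and device_state["free_disk_mb"] >= 200)
        self._checked_at = now
        return self._flag
```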

    Real-time scheduling for software testing

    Software is subjected to a large variety of tests on different devices prior to public release, including pre-submit and post-submit tests that have different scheduling attributes. The number of physical devices available for testing is limited, e.g., due to the devices being unreleased or under development. This disclosure provides techniques for scheduling software tests on a scarce pool of physical devices. The tests have differing scheduling attributes, e.g., durations, periodicity, degree of preemptiveness, etc. Tests are scheduled such that a test experiences low latency (queueing delay) regardless of test duration, the availability of the device pool is improved, and device underutilization and overloading are reduced. The techniques improve test scheduling, can enable improved engineering and application-developer productivity, and can help reduce time to market for new devices.
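    One simple baseline for keeping queueing delay low on a scarce pool is shortest-job-first assignment to the next free device, sketched below; the disclosure also covers periodicity and preemption, which this sketch omits, and the test representation is an assumption:

```python
import heapq

def schedule_tests(tests, num_devices):
    """Sketch: assign tests with differing durations to a scarce device
    pool, shortest job first on the earliest-free device, so short tests
    are not stuck queueing behind long ones.

    tests: list of {"name": str, "duration": number} (hypothetical shape).
    Returns a list of (test name, device index, start time).
    """
    ready = sorted(tests, key=lambda t: t["duration"])  # shortest first
    devices = [(0, d) for d in range(num_devices)]      # (free_at, device)
    heapq.heapify(devices)
    schedule = []
    for test in ready:
        free_at, dev = heapq.heappop(devices)           # earliest-free device
        schedule.append((test["name"], dev, free_at))
        heapq.heappush(devices, (free_at + test["duration"], dev))
    return schedule
```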

    Providing Entity Information Within A Video Player App

    Video creators who post videos to online video hosting websites or social media can manually add associated links for the video. When a viewer clicks on an associated link, the viewer is redirected from the video app to another app, thereby disrupting the video experience. This disclosure describes machine learning techniques to automatically surface associated links for a video within a video player app and to automatically make curated content about entities in the video available for the user to explore within the video player app. By displaying entity information within the video app while playback continues, the user is provided a seamless, uninterrupted video watching experience.

    From experiment to design – fault characterization and detection in parallel computer systems using computational accelerators

    This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads in hybrid computer systems (e.g., using CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems using computational accelerators. The experimental validation studies were conducted to help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies were based on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another co-designed system uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs, and to detect abnormal behaviors of those parallel processes (Ch. 6). The third co-designed system is a co-processor that has a set of new instructions in order to support software-implemented fault detection techniques (Ch. 7). The work described in this dissertation gains more importance because heterogeneous processors have become an essential component of state-of-the-art supercomputers. GPUs were used in three of the five fastest supercomputers that were operating in 2011. Our work included comprehensive fault characterization studies in CPU-GPU hybrid computers. 
In CPUs, we monitored the target systems for a long period of time after injecting faults (a temporally comprehensive experiment) and injected faults into various types of program state, including dynamically allocated memory (to be spatially comprehensive). In GPUs, we used fault injection studies to demonstrate the importance of detecting silent data corruption (SDC) errors, which are mainly due to the lack of fine-grained protections and the massive use of fault-insensitive data. This dissertation also presents transparent fault tolerance frameworks and techniques that are directly applicable to hybrid computers built using only commercial off-the-shelf hardware components. This dissertation shows that by developing an understanding of the failure characteristics and error propagation paths of target programs, we were able to create fault tolerance frameworks and techniques that can quickly detect and recover from hardware faults with low performance and hardware overheads.

    Fuzz testing of smartphones and IoT devices

    Fuzz testing is an effective technique for finding software vulnerabilities. Fuzzing works by feeding quasi-random, auto-generated input sequences to a target program and searching for failures. When used to test physical devices, fuzzing can occasionally brick the devices, leading to significant testing expenses. Also, while existing kernel fuzzing is effective at finding kernel-interface vulnerabilities, it is not as efficient at finding deeply hidden vulnerabilities. This disclosure presents an architecture for continuously running fuzz tests at scale on physical devices, including on kernel and hardware abstraction layer (HAL) modules. Multiple fuzzers run parallel tests and collaborate in a decentralized manner. Fuzzers share control flow paths and the corresponding code coverage as they are discovered. Fuzzers also share syscall sequences that brick devices as they are discovered, and arrive at an efficient set of sequences that maximizes test coverage.
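    The decentralized sharing of coverage and bricking sequences can be sketched as below; the class, its fields, and the broadcast mechanism are illustrative assumptions rather than the disclosed architecture:

```python
class CollaborativeFuzzer:
    """Sketch of decentralized corpus sharing between fuzzers: any input
    that reaches new coverage is broadcast so peers mutate it further, and
    any sequence that bricks a device is broadcast so peers avoid it."""

    def __init__(self, name):
        self.name = name
        self.corpus = [b"seed"]   # inputs worth mutating further
        self.coverage = set()     # control-flow edges seen so far
        self.brickers = set()     # syscall sequences known to brick devices

    def report(self, peers, inp, edges, bricked):
        """Broadcast a discovery to all peers (and record it locally)."""
        new_edges = edges - self.coverage
        if bricked:
            for f in [self] + peers:
                f.brickers.add(inp)      # never replay a bricking sequence
        elif new_edges:
            for f in [self] + peers:
                f.coverage |= new_edges  # shared coverage map grows
                f.corpus.append(inp)     # shared corpus grows

    def next_input(self):
        # Pick the most recent interesting input, skipping known brickers.
        for inp in reversed(self.corpus):
            if inp not in self.brickers:
                return inp
```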